Results 1 - 20 of 23
1.
PNAS Nexus ; 3(4): pgae164, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38689704

ABSTRACT

Optical fibers are used to image biological processes in vivo. In this context, high spatial resolution and robustness to fiber movements are key to enabling decision-making processes (e.g., for microendoscopy). Recently, a single-pixel imaging technique based on a multicore fiber photonic lantern was introduced, named computational optical imaging using a lantern (COIL). A proximal algorithm based on a sparsity prior, dubbed SARA-COIL, was further proposed to solve the associated inverse problem and enable image reconstruction for high-resolution COIL microendoscopy. In this work, we develop a data-driven approach for COIL. We replace the sparsity prior in the proximal algorithm with a learned denoiser, leading to a plug-and-play (PnP) algorithm. The resulting PnP method, based on a proximal primal-dual algorithm, solves the Morozov formulation of the inverse problem. We use recent results in learning theory to train a network with desirable Lipschitz properties, and we show that the resulting primal-dual PnP algorithm converges to a solution of a monotone inclusion problem. Our simulations show that the proposed data-driven approach improves reconstruction quality over the variational SARA-COIL method on both simulated and real data.
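
A minimal sketch of the plug-and-play primal-dual idea summarized above: a Chambolle-Pock/Condat-Vũ-type iteration for the Morozov (ℓ2-ball-constrained) formulation, in which the proximity operator of the regularizer is replaced by a denoiser. The random sensing matrix, the Gaussian-filter "denoiser", and all parameter values are illustrative stand-ins, not the COIL forward model or the Lipschitz-constrained network trained in the paper.

```python
# PnP primal-dual sketch (illustrative, NOT the SARA-COIL/COIL code):
# solve  min_x R(x)  s.t.  ||Phi x - y||_2 <= epsilon  (Morozov formulation),
# with prox_R replaced by a generic denoiser D.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
n, m = 32 * 32, 600                       # 32x32 image, number of measurements
x_true = np.zeros(n); x_true[rng.choice(n, 40, replace=False)] = 1.0
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in for the sensing operator
y = Phi @ x_true + 0.01 * rng.normal(size=m)
epsilon = 0.012 * np.sqrt(m)                 # radius of the data-fidelity ball

def denoise(z):
    # Stand-in denoiser: light Gaussian smoothing of the 32x32 image.
    return gaussian_filter(z.reshape(32, 32), sigma=0.5).ravel()

L = np.linalg.norm(Phi, 2)                   # spectral norm of Phi
tau, sigma = 0.9 / L, 0.9 / L                # step sizes with tau*sigma*L^2 < 1

x, u = np.zeros(n), np.zeros(m)
for _ in range(200):
    x_new = denoise(x - tau * (Phi.T @ u))   # "prox" of the regularizer -> denoiser
    v = u + sigma * (Phi @ (2 * x_new - x))  # dual update on the data-fidelity term
    # prox of the conjugate of the ball indicator, via Moreau's identity
    p = v / sigma
    proj = y + (p - y) * min(1.0, epsilon / (np.linalg.norm(p - y) + 1e-12))
    u = v - sigma * proj
    x = x_new

print("data residual:", np.linalg.norm(Phi @ x - y), "| epsilon:", epsilon)
```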

2.
JHEP Rep ; 6(3): 101008, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38379584

ABSTRACT

Background & Aims: The diagnosis of primary liver cancers (PLCs) can be challenging, especially on biopsies and for combined hepatocellular-cholangiocarcinoma (cHCC-CCA). We automatically classified PLCs on routine-stained biopsies using a weakly supervised learning method. Method: We selected 166 PLC biopsies divided into training, internal and external validation sets: 90, 29 and 47 samples, respectively. Two liver pathologists reviewed each whole-slide hematein eosin saffron (HES)-stained image (WSI). After annotating the tumour/non-tumour areas, tiles of 256×256 pixels were extracted from the WSIs and used to train a ResNet18 neural network. The tumour/non-tumour annotations served as labels during training, and the network's last convolutional layer was used to extract new tumour tile features. Without knowledge of the precise labels of the malignancies, we then applied an unsupervised clustering algorithm. Results: Pathological review classified the training and validation sets into hepatocellular carcinoma (HCC, 33/90, 11/29 and 26/47), intrahepatic cholangiocarcinoma (iCCA, 28/90, 9/29 and 15/47), and cHCC-CCA (29/90, 9/29 and 6/47). In the two-cluster model, Clusters 0 and 1 contained mainly HCC and iCCA histological features. The diagnostic agreement between the pathological diagnosis and the two-cluster model predictions (major contingent) in the internal and external validation sets was 100% (11/11) and 96% (25/26) for HCC and 78% (7/9) and 87% (13/15) for iCCA, respectively. For cHCC-CCA, we observed a highly variable proportion of tiles from each cluster (cluster 0: 5-97%; cluster 1: 2-94%). Conclusion: Our method, applied to HES-stained PLC biopsies, could identify specific morphological features of HCC and iCCA. Although no specific features of cHCC-CCA were recognized, assessing the proportion of HCC and iCCA tiles within a slide could facilitate the identification of cHCC-CCA. Impact and implications: The diagnosis of primary liver cancers can be challenging, especially on biopsies and for combined hepatocellular-cholangiocarcinoma (cHCC-CCA). We automatically classified primary liver cancers on routine-stained biopsies using a weakly supervised learning method. Our model identified specific features of hepatocellular carcinoma and intrahepatic cholangiocarcinoma. Although no specific features of cHCC-CCA were recognized, identifying the hepatocellular carcinoma and intrahepatic cholangiocarcinoma tiles within a slide could facilitate the diagnosis of primary liver cancers, and particularly of cHCC-CCA.
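
A minimal sketch of the unsupervised step described above: extract penultimate-layer ResNet18 features for 256×256 tumour tiles, then cluster them without labels. The random tensors stand in for HES tile crops, `weights=None` stands in for loading the tumour/non-tumour fine-tuned checkpoint mentioned in the abstract, and two-cluster KMeans is only one possible choice of clustering algorithm.

```python
# Sketch: ResNet18 penultimate-layer features for 256x256 tiles + 2-cluster KMeans.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.cluster import KMeans

backbone = models.resnet18(weights=None)   # in practice, load the fine-tuned weights here
backbone.fc = nn.Identity()                # keep the 512-d feature vector per tile
backbone.eval()

tiles = torch.rand(64, 3, 256, 256)        # placeholder batch of tumour tiles
with torch.no_grad():
    feats = backbone(tiles).numpy()        # shape (64, 512)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
# Per-slide cluster proportions, e.g. to flag mixed (cHCC-CCA-like) composition:
print("fraction of tiles in cluster 0:", float((clusters == 0).mean()))
```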

3.
Eur Radiol ; 2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38388719

ABSTRACT

RATIONALE AND OBJECTIVES: Automated evaluation of abdominal computed tomography (CT) scans should help radiologists manage their massive workloads, thereby leading to earlier diagnoses and better patient outcomes. Our objective was to develop a machine-learning model capable of reliably identifying suspected bowel obstruction (BO) on abdominal CT. MATERIALS AND METHODS: The internal dataset comprised 1345 abdominal CTs obtained in 2015-2022 from 1273 patients with suspected BO; among them, 670 were annotated as BO yes/no by an experienced abdominal radiologist. The external dataset consisted of 88 radiologist-annotated CTs. We developed a full preprocessing pipeline for abdominal CT comprising a model to locate the abdominal-pelvic region and another model to crop the 3D scan around the body. We built, trained, and tested several neural-network architectures for the binary classification (BO, yes/no) of each CT. F1 and balanced accuracy scores were computed to assess model performance. RESULTS: The mixed convolutional network pretrained on the Kinetics-400 dataset achieved the best results: with the internal dataset, the F1 score was 0.92, balanced accuracy 0.86, and sensitivity 0.93; with the external dataset, the corresponding values were 0.89, 0.89, and 0.89. When calibrated for sensitivity, this model produced 1.00 sensitivity, 0.84 specificity, and an F1 score of 0.88 with the internal dataset; corresponding values were 0.98, 0.76, and 0.87 with the external dataset. CONCLUSION: The 3D mixed convolutional neural network developed here shows great potential for the automated binary classification (BO yes/no) of abdominal CT scans from patients with suspected BO. CLINICAL RELEVANCE STATEMENT: The 3D mixed CNN automates bowel obstruction classification, potentially automating patient selection and CT prioritization and leading to an enhanced radiologist workflow. KEY POINTS: • The rising incidence of bowel obstruction strains radiologists; AI can aid urgent CT reading. • Using 1345 CT scans, neural networks for bowel obstruction detection achieved high accuracy and sensitivity on external testing. • A 3D mixed CNN effectively automates CT reading prioritization and speeds up bowel obstruction diagnosis.
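
A minimal sketch of the kind of model described above: a Kinetics-400-pretrained "mixed convolution" 3D network adapted to binary BO classification of a preprocessed CT volume. The volume shape, the channel replication, and the replaced classification head are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: mixed-convolution 3D network (MC3-18, Kinetics-400 weights) adapted to
# binary bowel-obstruction classification of a cropped abdominal CT volume.
import torch
import torch.nn as nn
from torchvision.models.video import mc3_18, MC3_18_Weights

model = mc3_18(weights=MC3_18_Weights.KINETICS400_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # BO yes/no head
model.eval()

# Placeholder volume: (batch, channels, depth, height, width).
# The single-channel CT is replicated to 3 channels to match the video backbone.
ct = torch.rand(1, 1, 64, 128, 128)
ct3 = ct.repeat(1, 3, 1, 1, 1)

with torch.no_grad():
    logits = model(ct3)
prob_bo = torch.softmax(logits, dim=1)[0, 1].item()
print("P(bowel obstruction) =", prob_bo)
```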

4.
IEEE Trans Image Process ; 33: 134-148, 2024.
Article in English | MEDLINE | ID: mdl-37988215

ABSTRACT

The optimization of prediction and update operators plays a prominent role in lifting-based image coding schemes. In this paper, we focus on learning the prediction and update models involved in a recent Fully Connected Neural Network (FCNN)-based lifting structure. While a straightforward approach consists in separately learning the different FCNN models by optimizing appropriate loss functions, jointly learning those models is a more challenging problem. To address this problem, we first consider a statistical model-based entropy loss function that yields a good approximation to the coding rate. Then, we develop a multi-scale optimization technique to learn all the FCNN models simultaneously. For this purpose, two loss functions defined across the different resolution levels of the proposed representation are investigated. While the first function combines standard prediction and update loss functions, the second one aims to obtain a good approximation to the rate-distortion criterion. Experimental results carried out on two standard image datasets show the benefits of the proposed approaches in the context of lossy and lossless compression.

5.
Med Image Anal ; 77: 102341, 2022 04.
Article in English | MEDLINE | ID: mdl-34998110

ABSTRACT

The reconstruction of a volumetric image from Digital Breast Tomosynthesis (DBT) measurements is an ill-posed inverse problem, for which existing iterative regularized approaches can provide a good solution. However, the clinical task is generally omitted in the derivation of those techniques, although it plays a primary role in the radiologist's diagnosis. In this work, we address this issue by introducing a novel variational formulation for DBT reconstruction, tailored for a specific clinical task, namely the detection of microcalcifications. Our method aims at simultaneously enhancing the detectability performance and enabling a high-quality restoration of the background breast tissues. Our contribution is threefold. First, we introduce an original task-based reconstruction framework by proposing a detectability function inspired by mathematical model observers. Second, we propose a novel total-variation regularizer whose gradient field accounts for the different morphological contents of the imaged breast. Third, we integrate the two developed measures into a cost function, minimized with a new form of the Majorize-Minimize Memory Gradient (3MG) algorithm. We conduct a numerical comparison of the convergence speed of the proposed method with those of standard convex optimization algorithms. Experimental results demonstrate, both qualitatively and quantitatively, the benefits of our DBT reconstruction approach.
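
A toy sketch of the generic variational backbone underlying the formulation above: a least-squares data-fidelity term plus a smoothed total-variation penalty, minimized by plain gradient descent on a small 2D problem. The detectability term, the morphology-aware regularizer, and the 3MG solver from the abstract are deliberately not reproduced; the operator and parameter values are illustrative.

```python
# Toy variational reconstruction: least-squares data fit + smoothed (hyperbolic)
# total variation, minimized by gradient descent. Illustrative structure only.
import numpy as np

rng = np.random.default_rng(1)
n = 32
x_true = np.zeros((n, n)); x_true[10:22, 12:20] = 1.0        # toy "tissue" block
H = rng.normal(size=(n * n // 2, n * n)) / np.sqrt(n * n)    # toy linear projector
y = H @ x_true.ravel() + 0.01 * rng.normal(size=H.shape[0])

lam, delta = 0.02, 1e-2          # regularization weight, TV smoothing parameter

def grad_tv(x):
    # Gradient of sum sqrt(|forward differences|^2 + delta^2) (smoothed TV).
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    norm = np.sqrt(dx**2 + dy**2 + delta**2)
    px, py = dx / norm, dy / norm
    return (np.roll(px, 1, axis=1) - px) + (np.roll(py, 1, axis=0) - py)

x = np.zeros((n, n))
step = 0.5 / (np.linalg.norm(H, 2) ** 2 + 8 * lam / delta)   # crude Lipschitz bound
for _ in range(300):
    grad = (H.T @ (H @ x.ravel() - y)).reshape(n, n) + lam * grad_tv(x)
    x -= step * grad

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```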


Subjects
Breast Neoplasms; Mammography; Algorithms; Breast/diagnostic imaging; Breast Neoplasms/diagnostic imaging; Female; Humans; Mammography/methods; Models, Theoretical; Phantoms, Imaging
6.
IEEE Trans Image Process ; 31: 569-584, 2022.
Article in English | MEDLINE | ID: mdl-34890328

ABSTRACT

The lifting-based wavelet transform has been extensively used for efficient compression of various types of visual data. Generally, the performance of such coding schemes strongly depends on the lifting operators used, namely the prediction and update filters. Unlike conventional schemes based on linear filters, we propose, in this paper, to learn these operators by exploiting neural networks. More precisely, a classical Fully Connected Neural Network (FCNN) architecture is first employed to perform the prediction and update. Then, we propose to improve this FCNN-based Lifting Scheme (LS) in order to better account for the input image to be encoded. Thus, a novel dynamical FCNN model is developed, making the learning process adaptive to the input image content; two adaptive learning techniques are proposed for this purpose. While the first one resorts to an iterative algorithm where the computation of two kinds of variables is performed in an alternating manner, the second learning method aims to learn the model parameters directly through a reformulation of the loss function. Experimental results carried out on various test images show the benefits of the proposed approaches in the context of lossy and lossless image compression.
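
A minimal sketch of a single lifting step in which the prediction and update operators are small fully connected networks, as described above. The network sizes, the context length, and the toy training loss are illustrative choices, not the architecture or objectives used in the paper.

```python
# Sketch: one lifting step with FCNN predict/update operators (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCNNLiftingStep(nn.Module):
    def __init__(self, context=4, hidden=32):
        super().__init__()
        self.context = context
        self.predict = nn.Sequential(nn.Linear(context, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.update = nn.Sequential(nn.Linear(context, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def neighborhood(self, x):
        # Sliding window of `context` samples around each position (replicate padding).
        pad = self.context // 2
        xp = F.pad(x.unsqueeze(1), (pad, pad - 1), mode="replicate").squeeze(1)
        return xp.unfold(-1, self.context, 1)          # (batch, N, context)

    def forward(self, signal):                          # signal: (batch, 2N)
        even, odd = signal[:, 0::2], signal[:, 1::2]    # lazy wavelet split
        detail = odd - self.predict(self.neighborhood(even)).squeeze(-1)
        approx = even + self.update(self.neighborhood(detail)).squeeze(-1)
        return approx, detail

step = FCNNLiftingStep()
x = torch.randn(8, 64)
approx, detail = step(x)
# Toy objective: compressible (sparse) details, approximation close to the even samples.
loss = detail.abs().mean() + 0.1 * ((approx - x[:, 0::2]) ** 2).mean()
loss.backward()
print(approx.shape, detail.shape, float(loss))
```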

7.
Article in English | MEDLINE | ID: mdl-37015435

ABSTRACT

In this paper, we introduce a variational Bayesian algorithm (VBA) for image blind deconvolution. Our VBA generic framework incorporates smoothness priors on the unknown blur/image and possible affine constraints (e.g., sum to one) on the blur kernel, integrating the VBA within a neural network paradigm following an unrolling methodology. The proposed architecture is trained in a supervised fashion, which allows us to optimally set two key hyperparameters of the VBA model and leads to further improvements in terms of resulting visual quality. Various experiments involving grayscale/color images and diverse kernel shapes, are performed. The numerical examples illustrate the high performance of our approach when compared to state-of-the-art techniques based on optimization, Bayesian estimation, or deep learning.
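
A generic sketch of the unrolling paradigm mentioned above: a fixed number of iterations of a simple (non-blind) gradient-descent deblurring scheme is wrapped in a network module, with the step size and a Tikhonov regularization weight learned per iteration in a supervised fashion. This only illustrates the unrolling idea; the actual VBA update rules and hyperparameters from the paper are not shown.

```python
# Generic unrolling sketch with learnable per-iteration hyperparameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledDeblur(nn.Module):
    def __init__(self, n_iter=8):
        super().__init__()
        self.log_step = nn.Parameter(torch.full((n_iter,), -2.0))  # learnable step sizes
        self.log_lam = nn.Parameter(torch.full((n_iter,), -4.0))   # learnable reg weights

    def forward(self, y, kernel):
        # y, x: (batch, 1, H, W); kernel: (1, 1, 5, 5), assumed known (non-blind here)
        x = y.clone()
        kflip = torch.flip(kernel, dims=(-2, -1))                  # adjoint of the blur
        for step, lam in zip(self.log_step.exp(), self.log_lam.exp()):
            residual = F.conv2d(x, kernel, padding=2) - y          # padding=2 ~ "same" for 5x5
            grad = F.conv2d(residual, kflip, padding=2) + lam * x
            x = x - step * grad
        return x

model = UnrolledDeblur()
kernel = torch.ones(1, 1, 5, 5) / 25.0           # toy box blur
x_clean = torch.rand(4, 1, 32, 32)
y = F.conv2d(x_clean, kernel, padding=2) + 0.01 * torch.randn(4, 1, 32, 32)

loss = F.mse_loss(model(y, kernel), x_clean)     # one supervised training step
loss.backward()
print("loss:", float(loss))
```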

8.
Med Phys ; 48(10): 6339-6361, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34423442

ABSTRACT

PURPOSE: Discretizing tomographic forward and backward operations is a crucial step in the design of model-based reconstruction algorithms. Standard projectors rely on linear interpolation, whose adjoint introduces discretization errors during backprojection. More advanced techniques are obtained through geometric footprint models that may present a high computational cost and an inner logic that is not suitable for implementation on massively parallel computing architectures. In this work, we take a fresh look at the discretization of resampling transforms and focus on the issue of magnification-induced local sampling variations by introducing a new magnification-driven interpolation approach for tomography. METHODS: Starting from the existing literature on spline interpolation for magnification purposes, we provide a mathematical formulation for discretizing a one-dimensional homography. We then extend our approach to two-dimensional representations in order to account for the geometry of cone-beam computed tomography with a flat-panel detector. Our new method relies on the decomposition of signals onto a space generated by nonuniform B-splines so as to capture the spatially varying magnification that locally affects sampling. We propose various degrees of approximation for a rapid implementation of the proposed approach. Our framework allows us to define a novel family of projector/backprojector pairs parameterized by the order of the employed B-splines. The state-of-the-art distance-driven interpolation appears to fit into this family, thus providing new insight into, and a new computational layout for, this scheme. The question of data resampling at the detector level is handled and integrated with reconstruction in a single framework. RESULTS: Experiments on both synthetic data and real data acquired with a quality assurance phantom were performed to validate our approach. We show experimentally that our approximate implementations are associated with reduced complexity while achieving near-optimal performance. In contrast with linear interpolation, B-splines guarantee full usage of all data samples, and thus of the X-ray dose, leading to more uniform noise properties. In addition, higher-order B-splines allow analytical and iterative reconstruction to reach higher resolution. These benefits appear more significant when downsampling frames acquired by X-ray flat-panel detectors with small pixels. CONCLUSIONS: Magnification-driven B-spline interpolation is shown to provide high-accuracy projection operators with good-quality adjoints for iterative reconstruction. It equally applies to backprojection for analytical reconstruction and to detector data downsampling.
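
A minimal sketch of spline-based resampling of a 1D detector profile under a spatially varying magnification, using SciPy's B-spline interpolation. The magnification profile, the spline order, and the comparison with linear interpolation are illustrative; the construction of the paper's full projector/backprojector pairs is not shown.

```python
# Sketch: resampling a 1D detector profile under spatially varying magnification
# with cubic B-spline interpolation (scipy), compared to linear interpolation.
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(0)
n = 256
profile = np.sin(np.linspace(0, 6 * np.pi, n)) + 0.05 * rng.normal(size=n)

# Output pixel i is assumed to see the input at coordinate i / m(i), where m(i)
# is a (hypothetical) magnification that varies across the detector.
i_out = np.arange(n, dtype=float)
magnification = 1.0 + 0.3 * i_out / (n - 1)        # 1.0 at one edge, 1.3 at the other
coords = i_out / magnification

linear_rs = map_coordinates(profile, [coords], order=1, mode="nearest")
bspline_rs = map_coordinates(profile, [coords], order=3, mode="nearest")  # cubic B-spline

print("max |cubic - linear| =", float(np.abs(bspline_rs - linear_rs).max()))
```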


Subjects
Algorithms; Cone-Beam Computed Tomography; Image Processing, Computer-Assisted; Phantoms, Imaging; Tomography; Tomography, X-Ray Computed
9.
Microsc Res Tech ; 84(7): 1553-1562, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33491837

ABSTRACT

We demonstrate the benefit of a novel laser strategy in multiphoton microscopy (MPM). The inexpensive, simple, turn-key supercontinuum laser system, together with its spectral shaping module, constitutes an ideal approach for one-shot microscopic imaging of many fluorophores without modification of the excitation parameters: central wavelength, spectral bandwidth, and average power. The versatility of the resulting multiplex-multiphoton microscopy (M-MPM) device is illustrated by images of many models from several origins (biological, medical, or plant), generated while keeping the spectral excitation parameters constant. The resolution of the M-MPM device is quantified through a point-spread-function (PSF) assessment procedure based on an original, robust, and reliable computational approach, FIGARO. The estimated PSF widths for our M-MPM system are shown to be comparable to standard values found in optical microscopy. The simplification of the excitation system constitutes a significant instrumental advance in biomedical MPM, paving the way to the imaging of many fluorophores with a single shot of excitation, without any modification of the lighting device. RESEARCH HIGHLIGHTS: A new multiplex-multiphoton microscopy device based on a supercontinuum laser is presented. The one-shot excitation device has imaged biomedical and plant models. Our original computational strategy measures standard microscopy resolution.
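
FIGARO itself is not reproduced here; the sketch below only illustrates the standard way of quantifying a PSF width, by least-squares fitting a 1D Gaussian to a measured intensity profile and converting the fitted sigma to a full width at half maximum (FWHM). The synthetic profile and its parameters are illustrative.

```python
# Sketch: estimate a PSF width (FWHM) by fitting a Gaussian to an intensity profile.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

# Synthetic "measured" profile across a sub-resolution bead (positions in microns).
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 101)
true_sigma = 0.35
profile = gaussian(x, 1.0, 0.0, true_sigma, 0.05) + 0.02 * rng.normal(size=x.size)

popt, _ = curve_fit(gaussian, x, profile, p0=(1.0, 0.0, 0.5, 0.0))
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])
print(f"estimated PSF FWHM: {fwhm:.3f} um (true {2.355 * true_sigma:.3f} um)")
```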


Subjects
Lasers; Microscopy, Fluorescence, Multiphoton; Fluorescent Dyes; Light
10.
Nat Commun ; 12(1): 634, 2021 01 27.
Article in English | MEDLINE | ID: mdl-33504775

ABSTRACT

The SARS-CoV-2 pandemic has put pressure on intensive care units, making the identification of predictors of disease severity a priority. We collect 58 clinical and biological variables, together with chest CT scan data, from 1003 coronavirus-infected patients from two French hospitals. We train a deep learning model based on CT scans to predict severity. We then construct the multimodal AI-severity score, which includes 5 clinical and biological variables (age, sex, oxygenation, urea, platelet count) in addition to the deep learning model. We show that neural-network analysis of CT scans brings unique prognostic information, although it is correlated with other markers of severity (oxygenation, LDH, and CRP), which explains the measurable but limited 0.03 increase in AUC obtained when adding CT-scan information to clinical variables. When comparing AI-severity with 11 existing severity scores, we find significantly improved prognostic performance; AI-severity can therefore rapidly become a reference scoring approach.
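
A minimal sketch of the multimodal idea described above: a deep-learning CT score is combined with a handful of clinical/biological variables in a simple logistic model and compared to a clinical-only model by AUC. The data are synthetic, the variable names are placeholders, and the logistic combiner is only one possible choice; this is not the published AI-severity formula.

```python
# Sketch: combine a CT-based deep-learning score with 5 clinical/biological
# variables (age, sex, oxygenation, urea, platelets) and compare AUCs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
clinical = rng.normal(size=(n, 5))    # standardized age, sex, SpO2, urea, platelets (synthetic)
severe = (clinical @ np.array([0.8, 0.2, -0.9, 0.5, -0.3]) + rng.normal(size=n) > 0).astype(int)
ct_score = 0.7 * severe + rng.normal(scale=0.8, size=n)    # stand-in deep-learning CT output

Xc_tr, Xc_te, ct_tr, ct_te, y_tr, y_te = train_test_split(
    clinical, ct_score, severe, test_size=0.3, random_state=0)

clin_only = LogisticRegression().fit(Xc_tr, y_tr)
multimodal = LogisticRegression().fit(np.column_stack([Xc_tr, ct_tr]), y_tr)

auc_clin = roc_auc_score(y_te, clin_only.predict_proba(Xc_te)[:, 1])
auc_multi = roc_auc_score(y_te, multimodal.predict_proba(np.column_stack([Xc_te, ct_te]))[:, 1])
print(f"AUC clinical only: {auc_clin:.3f} | AUC clinical + CT score: {auc_multi:.3f}")
```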


Subjects
COVID-19/diagnosis; COVID-19/physiopathology; Deep Learning; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Artificial Intelligence; COVID-19/classification; Humans; Models, Biological; Multivariate Analysis; Prognosis; Radiologists; Severity of Illness Index
11.
Entropy (Basel) ; 20(2), 2018 Feb 07.
Article in English | MEDLINE | ID: mdl-33265201

ABSTRACT

In this paper, we are interested in Bayesian inverse problems where either the data fidelity term or the prior distribution is Gaussian or derived from a hierarchical Gaussian model. Generally, Markov chain Monte Carlo (MCMC) algorithms allow us to generate sets of samples that are employed to infer some relevant parameters of the underlying distributions. However, when the parameter space is high-dimensional, the performance of stochastic sampling algorithms is very sensitive to existing dependencies between parameters. In particular, this problem arises when one aims to sample from a high-dimensional Gaussian distribution whose covariance matrix does not present a simple structure. Another challenge is the design of Metropolis-Hastings proposals that make use of information about the local geometry of the target density in order to speed up the convergence and improve mixing properties in the parameter space, while not being too computationally expensive. These two contexts are mainly related to the presence of two heterogeneous sources of dependencies, stemming either from the prior or the likelihood, in the sense that the related covariance matrices cannot be diagonalized in the same basis. In this work, we address these two issues. Our contribution consists of adding auxiliary variables to the model in order to dissociate the two sources of dependencies. In the new augmented space, only one source of correlation remains directly related to the target parameters, the other sources of correlation being captured by the auxiliary variables. Experiments are conducted on two practical image restoration problems, namely the recovery of multichannel blurred images embedded in Gaussian noise and the recovery of a signal corrupted by mixed Gaussian noise. Experimental results indicate that adding the proposed auxiliary variables makes the sampling problem simpler, since the new conditional distribution no longer contains highly heterogeneous correlations. Thus, the computational cost of each iteration of the Gibbs sampler is significantly reduced while good mixing properties are ensured.
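
A toy NumPy illustration of the auxiliary-variable idea described above: the target Gaussian has precision Q1 + Q2 whose two factors are not jointly diagonalizable; introducing an auxiliary variable v leaves the marginal of x unchanged while each Gibbs conditional involves only one of the two precision matrices (plus a scaled identity). The small dense matrices and direct factorizations are purely illustrative; the paper targets high-dimensional settings where such direct handling of Q1 + Q2 is impractical.

```python
# Toy auxiliary-variable Gibbs sampler for p(x) ∝ exp(-0.5 x^T (Q1+Q2) x + b^T x).
# With v | x ~ N(x, (mu*I - Q1)^{-1}) and mu > ||Q1||, the marginal of x is unchanged
# and the conditional of x | v only involves Q2 + mu*I.
import numpy as np

rng = np.random.default_rng(0)
d = 20
A = rng.normal(size=(d, d)); Q1 = A @ A.T / d + 0.1 * np.eye(d)   # e.g. likelihood precision
B = rng.normal(size=(d, d)); Q2 = B @ B.T / d + 0.1 * np.eye(d)   # e.g. prior precision
b = rng.normal(size=d)
mu = 1.05 * np.linalg.eigvalsh(Q1).max()                          # ensures mu*I - Q1 is SPD

R = mu * np.eye(d) - Q1                  # precision of v | x
P = Q2 + mu * np.eye(d)                  # precision of x | v
cholRinv = np.linalg.cholesky(np.linalg.inv(R))
cholPinv = np.linalg.cholesky(np.linalg.inv(P))

x, samples = np.zeros(d), []
for it in range(6000):
    v = x + cholRinv @ rng.normal(size=d)             # v | x ~ N(x, R^{-1})
    mean_x = np.linalg.solve(P, b + R @ v)            # x | v ~ N(P^{-1}(b + R v), P^{-1})
    x = mean_x + cholPinv @ rng.normal(size=d)
    if it >= 1000:
        samples.append(x)

emp_mean = np.mean(samples, axis=0)
true_mean = np.linalg.solve(Q1 + Q2, b)
print("max |empirical mean - true mean|:", float(np.abs(emp_mean - true_mean).max()))
```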

12.
Article in English | MEDLINE | ID: mdl-28368827

ABSTRACT

Discovering meaningful gene interactions is crucial for the identification of novel regulatory processes in cells. Building the related graphs accurately remains challenging due to the large number of possible solutions from available data. Nonetheless, enforcing a priori constraints on the graph structure, such as modularity, may reduce network indeterminacy issues. BRANE Clust (Biologically-Related A priori Network Enhancement with Clustering) refines gene regulatory network (GRN) inference thanks to cluster information. It works as a post-processing tool for inference methods (e.g., CLR, GENIE3). In BRANE Clust, the clustering is based on the inversion of a system of linear equations involving a graph-Laplacian matrix promoting a modular structure. Our approach is validated on the DREAM4 and DREAM5 datasets with objective measures, showing significant comparative improvements. We provide additional insights on the discovery of novel regulatory or co-expressed links in the inferred Escherichia coli network, evaluated using the STRING database. The comparative pertinence of clustering is discussed computationally (SIMoNe, WGCNA, X-means) and biologically (RegulonDB). The BRANE Clust software is available at: http://www-syscom.univ-mlv.fr/~pirayre/Codes-GRN-BRANE-clust.html.
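
The exact BRANE Clust functional is not reproduced here; the sketch below only illustrates "clustering by solving a graph-Laplacian linear system" in a random-walker style, which is in the spirit of the Laplacian system mentioned above. The toy gene graph, the seed choice, and the 0.5 threshold are illustrative assumptions.

```python
# Illustration: cluster a small weighted "gene" graph by solving a Laplacian system.
import numpy as np

# Toy weighted adjacency for 6 genes: modules {0,1,2} and {3,4,5}, weakly linked by (2,3).
W = np.zeros((6, 6))
for i, j, w in [(0, 1, .9), (0, 2, .8), (1, 2, .7), (3, 4, .9), (3, 5, .8), (4, 5, .7), (2, 3, .1)]:
    W[i, j] = W[j, i] = w

L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian
seeds = {0: 1.0, 5: 0.0}                       # gene 0 seeds cluster A, gene 5 seeds cluster B
unseeded = [i for i in range(6) if i not in seeds]

# Solve L_uu p_u = -L_us p_s for the unseeded soft assignments (random-walker system).
s_idx = list(seeds); p_s = np.array([seeds[i] for i in s_idx])
L_uu = L[np.ix_(unseeded, unseeded)]
L_us = L[np.ix_(unseeded, s_idx)]
p_u = np.linalg.solve(L_uu, -L_us @ p_s)

labels = np.empty(6)
labels[s_idx] = p_s
labels[unseeded] = p_u
print("soft assignments:", np.round(labels, 3))
print("hard cluster of each gene:", (labels > 0.5).astype(int))
```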


Subjects
Cluster Analysis; Computational Biology/methods; Gene Regulatory Networks/genetics; Algorithms; Databases, Genetic; Escherichia coli/genetics; Gene Expression Profiling; Software
13.
BMC Bioinformatics ; 16: 368, 2015 Nov 04.
Article in English | MEDLINE | ID: mdl-26537179

ABSTRACT

BACKGROUND: Inferring gene networks from high-throughput data constitutes an important step in the discovery of relevant regulatory relationships in organism cells. Despite the large number of available Gene Regulatory Network inference methods, the problem remains challenging: the underdetermination in the space of possible solutions requires additional constraints that incorporate a priori information on gene interactions. METHODS: Weighting all possible pairwise gene relationships by a probability of edge presence, we formulate the regulatory network inference as a discrete variational problem on graphs. We enforce biologically plausible coupling between groups and types of genes by minimizing an edge labeling functional coding for a priori structures. The optimization is carried out with graph cuts, an approach popular in image processing and computer vision. We compare the inferred regulatory networks to results achieved by the mutual-information-based Context Likelihood of Relatedness (CLR) method and by the state-of-the-art GENIE3, winner of the DREAM4 multifactorial challenge. RESULTS: Our BRANE Cut approach infers the five DREAM4 in silico networks more accurately (with improvements from 6% to 11%). On a real Escherichia coli compendium, an improvement of 11.8% compared to CLR and 3% compared to GENIE3 is obtained in terms of the Area Under the Precision-Recall curve. Up to 48 additional verified interactions are obtained over GENIE3 for a given precision. On this dataset involving 4345 genes, our method achieves a performance similar to that of GENIE3, while being more than seven times faster. The BRANE Cut code is available at: http://www-syscom.univ-mlv.fr/~pirayre/Codes-GRN-BRANE-cut.html. CONCLUSIONS: BRANE Cut is a weighted graph thresholding method. Using biologically sound penalties and data-driven parameters, it improves three state-of-the-art GRN inference methods. Owing to its computational efficiency, it is applicable as a generic post-processing step for network inference.


Subjects
Algorithms; Escherichia coli/genetics; Gene Regulatory Networks; Area Under Curve; Computer Simulation; Databases, Genetic; Reproducibility of Results
14.
IEEE Trans Image Process ; 23(12): 5531-44, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25347882

ABSTRACT

Nonlocal total variation (NLTV) has emerged as a useful tool in variational methods for image recovery problems. In this paper, we extend NLTV-based regularization to multicomponent images by taking advantage of the structure tensor (ST) resulting from the gradient of a multicomponent image. The proposed approach allows us to penalize the nonlocal variations, jointly for the different components, through various ℓ1,p matrix norms with p ≥ 1. To facilitate the choice of the hyperparameters, we adopt a constrained convex optimization approach in which we minimize the data fidelity term subject to a constraint involving the ST-NLTV regularization. The resulting convex optimization problem is solved with a novel epigraphical projection method. This formulation can be efficiently implemented thanks to the flexibility offered by recent primal-dual proximal algorithms. Experiments are carried out for color, multispectral, and hyperspectral images. The results demonstrate the benefit of introducing a nonlocal ST regularization and show that the proposed approach leads to significant improvements in terms of convergence speed over current state-of-the-art methods, such as the alternating direction method of multipliers.

15.
MAGMA ; 27(6): 509-29, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24619431

ABSTRACT

BACKGROUND: Parallel magnetic resonance imaging (MRI) is a fast imaging technique that helps acquire images that are highly resolved in space and time. Its performance depends on the reconstruction algorithm, which can proceed either in the k-space or in the image domain. OBJECTIVE AND METHODS: To improve the performance of the widely used SENSE algorithm, 2D regularization in the wavelet domain has been investigated. In this paper, we first extend this approach to 3D wavelet representations and a 3D sparsity-promoting regularization term, in order to address reconstruction artifacts that propagate across adjacent slices. The resulting optimality criterion is convex but nonsmooth, and we resort to the parallel proximal algorithm to minimize it. Second, to account for temporal correlation between successive scans in functional MRI (fMRI), we extend our first contribution to 3D + t acquisition schemes by incorporating a prior along the time axis into the objective function. RESULTS: Our first method (3D-UWR-SENSE) is validated on T1-weighted anatomical MRI data for gray/white matter segmentation. The second method (4D-UWR-SENSE) is validated for detecting evoked activity during a fast event-related functional MRI protocol. CONCLUSION: We show that our algorithm outperforms the SENSE reconstruction at the subject and group levels (15 subjects) for different contrasts of interest (motor or computation tasks) and two parallel acceleration factors (R = 2 and R = 4) on 2 × 2 × 3 mm³ echo planar imaging (EPI) images.
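
A heavily simplified illustration of the kind of wavelet-sparsity-promoting regularization described above: a single-coil, 2D reconstruction from undersampled Fourier data solved with ISTA and an ℓ1 penalty on Haar wavelet coefficients. The paper's 3D / 3D+t multi-coil (SENSE) formulation and its proximal algorithm are not reproduced; the sampling mask, wavelet, and parameters are illustrative.

```python
# Toy wavelet-regularized reconstruction from undersampled Fourier data (ISTA).
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 64
image = np.zeros((n, n)); image[16:48, 20:44] = 1.0; image[24:40, 28:36] = 2.0

mask = rng.random((n, n)) < 0.35                   # random k-space undersampling
y = mask * np.fft.fft2(image, norm="ortho")        # observed k-space samples

lam, wave = 0.02, "haar"

def soft(coeffs, t):
    # Soft-threshold all wavelet bands (including the approximation, for brevity).
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr = np.sign(arr) * np.maximum(np.abs(arr) - t, 0.0)
    return pywt.array_to_coeffs(arr, slices, output_format="wavedec2")

x = np.zeros((n, n))
for _ in range(100):  # ISTA, step size 1 (masked orthonormal Fourier operator has norm <= 1)
    grad = np.fft.ifft2(mask * (np.fft.fft2(x, norm="ortho") - y), norm="ortho").real
    z = x - grad
    x = pywt.waverec2(soft(pywt.wavedec2(z, wave, level=3), lam), wave)

print("relative error:", np.linalg.norm(x - image) / np.linalg.norm(image))
```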


Subjects
Brain/anatomy & histology; Brain/physiopathology; Evoked Potentials/physiology; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Wavelet Analysis; Algorithms; Brain Mapping/methods; Data Compression/methods; Humans; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted; Spatio-Temporal Analysis
16.
IEEE Trans Image Process ; 23(1): 137-52, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24144661

ABSTRACT

In this paper, we develop an efficient bit allocation strategy for subband-based image coding systems. More specifically, our objective is to design a new optimization algorithm based on a rate-distortion optimality criterion. To this end, we consider the uniform scalar quantization of a class of mixed distributed sources following a Bernoulli-generalized Gaussian distribution. This model appears to be particularly well adapted to image data, which have a sparse representation in a wavelet basis. We then propose new approximations of the entropy and distortion functions using piecewise affine and exponential forms, respectively. Thanks to these approximations, the bit allocation problem is reformulated as a convex optimization problem. Solving the resulting problem allows us to derive the optimal quantization step for each subband. Experimental results show the benefits that can be drawn from the proposed bit allocation method in a typical transform-based coding application.
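
A classic high-rate bit-allocation sketch along the lines described above: each subband uses uniform scalar quantization with distortion model D_j(q) ≈ q²/12 and rate model R_j(q) ≈ h_j − log2(q), and the quantization steps are found by solving a small convex program. The entropies, weights, and budget are hypothetical, and the Bernoulli-generalized-Gaussian entropy/distortion approximations from the paper are not reproduced.

```python
# Sketch: per-subband quantization steps from a convex rate-constrained program.
import numpy as np
from scipy.optimize import minimize

h = np.array([6.0, 4.5, 3.0, 1.5])     # hypothetical per-subband entropies (bits/sample)
w = np.full(4, 0.25)                   # subband weights (fraction of coefficients)
R_budget = 2.5                         # target rate in bits per pixel

def distortion(log_q):                 # optimize over log(q) so that q > 0
    return np.sum(w * np.exp(2 * log_q) / 12.0)

def rate(log_q):
    return np.sum(w * (h - log_q / np.log(2.0)))

res = minimize(distortion, x0=np.zeros(4), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda lq: R_budget - rate(lq)}])

print("optimal quantization steps per subband:", np.round(np.exp(res.x), 3))
print("achieved rate (bits/pixel):", round(float(rate(res.x)), 3))
```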


Subjects
Algorithms; Artifacts; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Signal Processing, Computer-Assisted; Data Interpretation, Statistical; Reproducibility of Results; Sample Size; Sensitivity and Specificity
17.
IEEE Trans Image Process ; 20(9): 2450-62, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21421440

ABSTRACT

Regularization approaches have demonstrated their effectiveness for solving ill-posed problems. However, in the context of variational restoration methods, a challenging question remains, namely how to find a good regularizer. While total variation introduces staircase effects, wavelet-domain regularization brings other artefacts, e.g., ringing. A tradeoff can nevertheless be made by introducing a hybrid regularization including several terms that do not necessarily act in the same domain (e.g., spatial and wavelet transform domains). While this approach was shown to provide good results for solving deconvolution problems in the presence of additive Gaussian noise, an important issue is to efficiently deal with this hybrid regularization for more general noise models. To solve this problem, we adopt a convex optimization framework where the criterion to be minimized is split into the sum of more than two terms. For spatial-domain regularization, isotropic or anisotropic total-variation definitions using various gradient filters are considered. An accelerated version of the Parallel Proximal Algorithm is proposed to perform the minimization. Some difficulties in the computation of the proximity operators involved in this algorithm are also addressed in this paper. Numerical experiments performed in the context of Poisson data recovery show the good behavior of the algorithm, as well as promising results concerning the use of hybrid regularization techniques.

18.
Med Image Anal ; 15(2): 185-201, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21106436

ABSTRACT

To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques relying on multiple-coil acquisition have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full field-of-view (FOV) image has to be reconstructed from the acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used SENSitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria including more general penalties than a classical ℓ1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5 T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors.


Subjects
Algorithms; Brain/anatomy & histology; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Wavelet Analysis; Humans; Reproducibility of Results; Sensitivity and Specificity
19.
IEEE Trans Image Process ; 18(11): 2463-75, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19586821

ABSTRACT

Many research efforts have been devoted to the improvement of stereo image coding techniques for storage or transmission. In this paper, we are mainly interested in lossy-to-lossless coding schemes for stereo images allowing progressive reconstruction. The most commonly used approaches for stereo compression are based on disparity compensation techniques. The basic principle of this technique consists of first estimating the disparity map. Then, one image is considered as a reference and the other is predicted in order to generate a residual image. In this paper, we propose a novel approach, based on vector lifting schemes (VLS), which offers the advantage of generating two compact multiresolution representations of the left and the right views. We present two versions of this new scheme. A theoretical analysis of the performance of the considered VLS is also conducted. Experimental results indicate a significant improvement using the proposed structures compared with conventional methods.

20.
IEEE Trans Image Process ; 18(4): 813-30, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19278920

ABSTRACT

Illumination changes cause serious problems in many computer vision applications. We present a new method for robust depth estimation from a stereo pair under varying illumination conditions. First, a spatially varying multiplicative model is developed to account for brightness changes induced between the left and right views. The depth estimation problem, based on this model, is then formulated as a constrained optimization problem in which an appropriate convex objective function is minimized under various convex constraints modelling prior knowledge and observed information. The resulting multiconstrained optimization problem is finally solved via a parallel block iterative algorithm, which offers great flexibility in the incorporation of several constraints. Experimental results on both synthetic and real stereo pairs demonstrate the ability of our method to efficiently recover the depth and illumination variation fields simultaneously.
